We consider an online estimation problem involving a set of agents. Each agent has access to a (personal) process that generates samples from a real-valued distribution and seeks to estimate its mean. We study the case where some of the distributions have the same mean, and agents are allowed to actively query information from other agents. The goal is to design an algorithm that enables each agent to improve its mean estimate by communicating with other agents. The means and the number of distributions are unknown, which makes the task non-trivial. We introduce a novel collaborative strategy to solve this online personalized mean estimation problem. We analyze its time complexity and introduce variants that enjoy good performance in numerical experiments. We also extend our approach to the setting where clusters of agents with similar means seek to estimate the mean of their cluster.
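As an illustration of the setting only (not the paper's algorithm), the following minimal sketch lets an agent pool samples from peers whose confidence intervals overlap with its own. The `Agent` class, the sub-Gaussian confidence radius, and the overlap test are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

class Agent:
    """One agent with a private sample stream and a running mean estimate."""
    def __init__(self, mean, sigma=1.0):
        self.mean, self.sigma = mean, sigma
        self.n, self.sum = 0, 0.0

    def sample(self):
        self.n += 1
        self.sum += rng.normal(self.mean, self.sigma)

    def estimate(self):
        return self.sum / max(self.n, 1)

    def conf_radius(self, delta=0.05):
        # Sub-Gaussian confidence radius around the running mean.
        return self.sigma * np.sqrt(2 * np.log(2 / delta) / max(self.n, 1))

def collaborative_estimate(agent, peers):
    """Pool samples from peers whose confidence intervals overlap with the agent's
    (a crude stand-in for deciding that two streams share the same mean)."""
    compatible = [p for p in peers
                  if abs(p.estimate() - agent.estimate())
                  <= p.conf_radius() + agent.conf_radius()]
    total_n = agent.n + sum(p.n for p in compatible)
    total_sum = agent.sum + sum(p.sum for p in compatible)
    return total_sum / total_n

# Three agents share mean 0.0, one has mean 3.0.
agents = [Agent(0.0), Agent(0.0), Agent(0.0), Agent(3.0)]
for _ in range(200):
    for a in agents:
        a.sample()
print(agents[0].estimate(), collaborative_estimate(agents[0], agents[1:]))
```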
In this paper, we study differentially private empirical risk minimization (DP-ERM). It has been shown that the (worst-case) utility of DP-ERM decreases as the dimension grows, which is a major obstacle to privately learning large machine learning models. In high dimensions, it is common for some of a model's parameters to carry much more information than others. To exploit this, we propose a differentially private greedy coordinate descent (DP-GCD) algorithm. At each iteration, DP-GCD privately performs a coordinate gradient step along the (approximately) largest entry of the gradient. We show theoretically that DP-GCD can improve utility by exploiting structural properties of the problem's solution, such as sparsity or quasi-sparsity, making very fast progress in the early iterations. We then illustrate this numerically on synthetic and real datasets. Finally, we describe promising directions for future work.
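A simplified sketch of the greedy coordinate step described above, assuming report-noisy-max style selection with Laplace noise and omitting any sensitivity analysis or privacy accounting; `dp_gcd`, the noise scales, and the toy objective are illustrative choices, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_gcd(grad_fn, dim, iters=50, step=0.1, noise_scale=0.1):
    """Simplified differentially private greedy coordinate descent.

    Each iteration: (1) pick the coordinate with the (noisily) largest absolute
    gradient entry, (2) take a noisy gradient step on that single coordinate.
    Noise scales are placeholders; a real implementation would calibrate them
    to sensitivities and a privacy budget."""
    w = np.zeros(dim)
    for _ in range(iters):
        g = grad_fn(w)
        # Private greedy selection of the coordinate to update (report-noisy-max).
        j = int(np.argmax(np.abs(g) + rng.laplace(scale=noise_scale, size=dim)))
        # Private update of that coordinate only.
        w[j] -= step * (g[j] + rng.laplace(scale=noise_scale))
    return w

# Toy quadratic with a sparse solution: minimize 0.5 * ||w - w_star||^2.
w_star = np.zeros(100)
w_star[:3] = [5.0, -4.0, 3.0]
w_hat = dp_gcd(lambda w: w - w_star, dim=100)
print(np.round(w_hat[:5], 2))
```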
The wide availability of powerful personal devices capable of collecting their users' voice opens up the opportunity to build speaker-adapted speech recognition (ASR) systems or to take part in collaborative learning of ASR. In both cases, personalized acoustic models (AMs) can be built, i.e., AMs fine-tuned on a specific speaker's data. A question that naturally arises is whether sharing personalized acoustic models can leak personal information. In this paper, we show that it is possible to retrieve not only the speaker's gender but also their identity by exploiting only the changes in the weight matrices of a neural acoustic model locally adapted to that speaker. Incidentally, we observe phenomena that may help explain deep neural networks in the context of speech processing: gender can be identified almost surely using only the first layers, while speaker verification works well using the middle layers. Our experimental study on the TED-LIUM 3 dataset with an HMM/TDNN model shows 95% accuracy for gender detection and an equal error rate of 9.07% on the speaker verification task, obtained by exploiting only the weights of personalized models that could be exchanged instead of user data.
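A hedged sketch of the kind of attack described, assuming access to the global and locally adapted weight matrices of a single layer: flatten the weight change and train an off-the-shelf classifier on it. The helper name, the synthetic weights, and the placeholder labels below are all hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def weight_delta_features(personalized_w, global_w):
    """Flatten the change of one layer's weight matrix after local adaptation."""
    return (personalized_w - global_w).ravel()

# Hypothetical setup: one adapted first-layer matrix per speaker, plus gender labels.
rng = np.random.default_rng(0)
global_w = rng.normal(size=(64, 64))
adapted = [global_w + 0.01 * rng.normal(size=(64, 64)) for _ in range(40)]
genders = rng.integers(0, 2, size=40)              # placeholder labels

X = np.stack([weight_delta_features(w, global_w) for w in adapted])
clf = LogisticRegression(max_iter=1000).fit(X, genders)
print("train accuracy:", clf.score(X, genders))    # illustrative only: labels are random here
```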
This paper investigates how to effectively retrieve speaker information from personalized, speaker-adapted neural network acoustic models (AMs) in automatic speech recognition (ASR). The problem is especially relevant in the context of federated learning of ASR acoustic models, where a global model is learned on a server from updates received from multiple clients. We propose a method to analyze the information contained in neural network AMs based on the network's footprint on a so-called indicator dataset. Using this method, we develop two attack models that aim to infer speaker identity from updated personalized models without access to the actual users' speech data. Experiments on the TED-LIUM 3 corpus show that the proposed approaches are very effective and can achieve an equal error rate (EER) of 1-2%.
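A minimal sketch of the footprint idea under strong simplifying assumptions: each personalized model is reduced to its outputs on a fixed indicator batch, and footprints are compared with cosine similarity in a verification trial. The toy models and data below are placeholders, not the paper's attack models.

```python
import numpy as np

def model_footprint(model_fn, indicator_batch):
    """Concatenate the model's outputs on a fixed indicator batch into one vector.
    `model_fn` maps one input vector to one output vector."""
    return np.concatenate([model_fn(x).ravel() for x in indicator_batch])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical trial: two snapshots of speaker A vs. a different speaker B.
rng = np.random.default_rng(0)
indicator = [rng.normal(size=16) for _ in range(8)]            # placeholder "speech" frames
base = rng.normal(size=(16, 16))
speaker_a  = base + 0.05 * rng.normal(size=(16, 16))
speaker_a2 = speaker_a + 0.01 * rng.normal(size=(16, 16))
speaker_b  = base + 0.05 * rng.normal(size=(16, 16))

fp = lambda W: model_footprint(lambda x: np.tanh(W @ x), indicator)
print("same speaker :", cosine(fp(speaker_a), fp(speaker_a2)))
print("diff speaker :", cosine(fp(speaker_a), fp(speaker_b)))
```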
Partial differential equations (PDEs) are important tools to model physical systems, and including them into machine learning models is an important way of incorporating physical knowledge. Given any system of linear PDEs with constant coefficients, we propose a family of Gaussian process (GP) priors, which we call EPGP, such that all realizations are exact solutions of this system. We apply the Ehrenpreis-Palamodov fundamental principle, which works like a non-linear Fourier transform, to construct GP kernels mirroring standard spectral methods for GPs. Our approach can infer probable solutions of linear PDE systems from any data such as noisy measurements, or initial and boundary conditions. Constructing EPGP-priors is algorithmic, generally applicable, and comes with a sparse version (S-EPGP) that learns the relevant spectral frequencies and works better for big data sets. We demonstrate our approach on three families of systems of PDE, the heat equation, wave equation, and Maxwell's equations, where we improve upon the state of the art in computation time and precision, in some experiments by several orders of magnitude.
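For reference, the Ehrenpreis-Palamodov fundamental principle that the construction relies on can be stated informally as the integral representation below; the precise technical conditions on domains, varieties, and measures are omitted.

```latex
% Ehrenpreis--Palamodov fundamental principle (informal statement):
% every smooth solution of a constant-coefficient system $P(\partial_x) f = 0$
% on a convex domain admits an integral representation
\[
  f(x) \;=\; \sum_{i=1}^{s} \int_{V_i} D_i(x, z)\, e^{\langle x, z\rangle}\, \mathrm{d}\mu_i(z),
\]
% where the $V_i$ are algebraic varieties attached to the system, the $D_i$ are
% polynomial (Noetherian) multipliers, and the $\mu_i$ are suitable measures.
% An EPGP-style prior replaces the $\mu_i$ by random spectral measures, so that
% every realization of the resulting Gaussian process solves the system.
```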
Unbiased learning to rank (ULTR) studies the problem of mitigating various biases from implicit user feedback data such as clicks, and has been receiving considerable attention recently. A popular ULTR approach for real-world applications uses a two-tower architecture, where click modeling is factorized into a relevance tower with regular input features, and a bias tower with bias-relevant inputs such as the position of a document. A successful factorization will allow the relevance tower to be exempt from biases. In this work, we identify a critical issue that existing ULTR methods ignored - the bias tower can be confounded with the relevance tower via the underlying true relevance. In particular, the positions were determined by the logging policy, i.e., the previous production model, which would possess relevance information. We give both theoretical analysis and empirical results to show the negative effects on relevance tower due to such a correlation. We then propose three methods to mitigate the negative confounding effects by better disentangling relevance and bias. Empirical results on both controlled public datasets and a large-scale industry dataset show the effectiveness of the proposed approaches.
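A minimal PyTorch sketch of the generic two-tower factorization described above (relevance tower on regular features, bias tower on position). It illustrates the factorized click model only, not the three de-confounding methods proposed in the paper.

```python
import torch
import torch.nn as nn

class TwoTowerClickModel(nn.Module):
    """Minimal two-tower click model: click logit = relevance(features) + bias(position).
    A successful factorization keeps position information out of the relevance tower."""
    def __init__(self, num_features, num_positions):
        super().__init__()
        self.relevance = nn.Sequential(
            nn.Linear(num_features, 32), nn.ReLU(), nn.Linear(32, 1))
        self.bias = nn.Embedding(num_positions, 1)  # position-dependent examination bias

    def forward(self, features, position):
        return self.relevance(features).squeeze(-1) + self.bias(position).squeeze(-1)

# Training uses binary cross-entropy on observed clicks; at serving time
# only the relevance tower is used for ranking.
model = TwoTowerClickModel(num_features=16, num_positions=10)
feats = torch.randn(8, 16)
pos = torch.randint(0, 10, (8,))
clicks = torch.randint(0, 2, (8,)).float()
loss = nn.functional.binary_cross_entropy_with_logits(model(feats, pos), clicks)
print(float(loss))
```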
G-Enum histograms are a new fast and fully automated method for irregular histogram construction. By framing histogram construction as a density estimation problem and its automation as a model selection task, these histograms leverage the Minimum Description Length principle (MDL) to derive two different model selection criteria. Several proven theoretical results about these criteria give insights about their asymptotic behavior and are used to speed up their optimisation. These insights, combined with a greedy search heuristic, are used to construct histograms in linearithmic time rather than the polynomial time incurred by previous works. The capabilities of the proposed MDL density estimation method are illustrated in comparison with other fully automated methods in the literature, both on synthetic and large real-world data sets.
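To make the model-selection view concrete, here is a simplified penalized-likelihood stand-in for an MDL criterion that selects the number of equal-width bins. It is not the G-Enum criteria themselves; the `(k - 1)/2 * log(n)` parameter cost is a common textbook approximation used only for illustration.

```python
import numpy as np

def description_length(data, k):
    """Simplified two-part code length for an equal-width histogram with k bins:
    negative log-likelihood of the data under the histogram density plus a
    parameter cost of (k - 1)/2 * log(n)."""
    n = len(data)
    counts, edges = np.histogram(data, bins=k)
    width = edges[1] - edges[0]
    nz = counts[counts > 0]
    nll = -np.sum(nz * np.log(nz / (n * width)))
    return nll + 0.5 * (k - 1) * np.log(n)

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(3, 1.0, 500)])
best_k = min(range(2, 100), key=lambda k: description_length(data, k))
print("selected number of bins:", best_k)
```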
Neural Radiance Fields (NeRFs) are emerging as a ubiquitous scene representation that allows for novel view synthesis. Increasingly, NeRFs will be shareable with other people. Before sharing a NeRF, though, it might be desirable to remove personal information or unsightly objects. Such removal is not easily achieved with the current NeRF editing frameworks. We propose a framework to remove objects from a NeRF representation created from an RGB-D sequence. Our NeRF inpainting method leverages recent work in 2D image inpainting and is guided by a user-provided mask. Our algorithm is underpinned by a confidence-based view selection procedure. It chooses which of the individual 2D inpainted images to use in the creation of the NeRF, so that the resulting inpainted NeRF is 3D consistent. We show that our method for NeRF editing is effective for synthesizing plausible inpaintings in a multi-view coherent manner. We validate our approach using a new and still-challenging dataset for the task of NeRF inpainting.
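As a rough illustration of confidence-based view selection (the actual scoring in the paper may differ), one could score each 2D-inpainted view by its agreement with the other views inside the mask and keep only the most consistent ones; everything below is a toy stand-in.

```python
import numpy as np

def select_consistent_views(inpainted, masks, keep_fraction=0.5):
    """Crude view selection: score each inpainted view by how close its masked
    pixels are to the per-pixel median over all views, and keep the most
    consistent fraction."""
    median = np.median(np.stack(inpainted), axis=0)
    scores = []
    for img, m in zip(inpainted, masks):
        err = np.abs(img - median)[m].mean() if m.any() else 0.0
        scores.append(-err)                        # higher score = more consistent
    order = np.argsort(scores)[::-1]
    keep = order[: max(1, int(keep_fraction * len(inpainted)))]
    return sorted(keep.tolist())

# Toy example: 4 views of an 8x8 image, one view's inpainting is an outlier.
rng = np.random.default_rng(0)
views = [np.full((8, 8, 3), 0.5) + 0.01 * rng.normal(size=(8, 8, 3)) for _ in range(4)]
views[3] += 0.8                                    # inconsistent inpainting
mask = np.zeros((8, 8), dtype=bool); mask[2:6, 2:6] = True
print(select_consistent_views(views, [mask] * 4))
```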
Co-clustering is a class of unsupervised data analysis techniques that extract the existing underlying dependency structure between the instances and variables of a data table as homogeneous blocks. Most of those techniques are limited to variables of the same type. In this paper, we propose a mixed data co-clustering method based on a two-step methodology. In the first step, all the variables are binarized according to a number of bins chosen by the analyst, by equal-frequency discretization in the numerical case, or by keeping the most frequent values in the categorical case. The second step applies a co-clustering to the instances and the binary variables, leading to groups of instances and groups of variable parts. We apply this methodology on several data sets and compare with the results of a Multiple Correspondence Analysis applied to the same data.
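The first (binarization) step can be sketched directly from the description above; the function name, the `n_bins` default, and the `"__other__"` bucket are choices made for this example.

```python
import numpy as np
import pandas as pd

def binarize_mixed(df, n_bins=4):
    """Step 1 of the two-step methodology: turn every column into binary indicator
    columns, using equal-frequency intervals for numeric columns and the most
    frequent values (plus an 'other' bucket) for categorical ones."""
    parts = []
    for col in df.columns:
        if pd.api.types.is_numeric_dtype(df[col]):
            binned = pd.qcut(df[col], q=n_bins, duplicates="drop")
        else:
            top = df[col].value_counts().index[: n_bins - 1]
            binned = df[col].where(df[col].isin(top), other="__other__")
        parts.append(pd.get_dummies(binned, prefix=col))
    return pd.concat(parts, axis=1).astype(int)

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.integers(18, 80, size=200),
    "income": rng.lognormal(10, 0.5, size=200),
    "city": rng.choice(["paris", "lyon", "lille", "nice", "metz"], size=200),
})
binary = binarize_mixed(df)
print(binary.shape)  # instances x binary variable parts, ready for co-clustering
```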
Co-clustering is a data mining technique used to extract the underlying block structure between the rows and columns of a data matrix. Many approaches have been studied and have shown their capacity to extract such structures in continuous, binary or contingency tables. However, very little work has been done to perform co-clustering on mixed type data. In this article, we extend latent block model based co-clustering to the case of mixed data (continuous and binary variables). We then evaluate the effectiveness of the proposed approach on simulated data and we discuss its advantages and potential limitations.
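For context, a standard latent block model formulation that the mixed-data extension builds on looks roughly as follows, with the block densities chosen per column type.

```latex
% Latent block model for an n x d table x with row clusters z and column clusters w:
\[
  p(x \mid \theta) \;=\; \sum_{(z, w)}
  \prod_{i=1}^{n} \pi_{z_i}
  \prod_{j=1}^{d} \rho_{w_j}
  \prod_{i=1}^{n} \prod_{j=1}^{d} \varphi\!\left(x_{ij};\, \alpha_{z_i w_j}\right),
\]
% where $\pi$ and $\rho$ are the row- and column-cluster proportions and
% $\varphi(\cdot;\alpha_{k\ell})$ is a Gaussian density or a Bernoulli mass function
% depending on the type of column $j$, with block-specific parameters $\alpha_{k\ell}$.
```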